search time
A Related Work: Neural Architecture Search (NAS) was introduced to ease the process of manually designing complex neural network architectures.
However, existing MP-NAS methods face architectural limitations that hinder their use in SOTA search spaces, leaving the challenge of swiftly designing effective large models unresolved. Accuracy is obtained by training each network on ImageNet for 200 epochs. An accuracy prediction model that operates without FLOPs information is also evaluated; Table 2 reports the results of these models.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (0.55)
- Information Technology > Artificial Intelligence > Systems & Languages > Problem-Independent Architectures (0.42)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.41)
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions
One of the most important AI research questions is how to trade off computation against performance, since ``perfect rationality exists in theory but is impossible to achieve in practice''. Recently, Monte-Carlo tree search (MCTS) has attracted considerable attention due to its significant performance improvements in various challenging domains. However, the expensive time cost of search severely restricts its scope of application. This paper proposes Virtual MCTS (V-MCTS), a variant of MCTS that adaptively spends more search time on harder states and less on simpler ones. We give theoretical bounds for the proposed method and evaluate its performance and computation on $9 \times 9$ Go board games and Atari games. Experiments show that our method achieves performance comparable to the original search algorithm while requiring less than $50\%$ of the search time on average. We believe this approach is a viable alternative for tasks under limited time and resources.
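The abstract's core idea can be illustrated with a small sketch. This is a hedged simplification, not the paper's method: instead of V-MCTS's virtual-expansion criterion, it runs UCB1-style simulations at a single root and stops early once the root visit distribution stabilizes, so "easy" states consume fewer simulations. All names (`adaptive_mcts`, `simulate`, the tolerance) are illustrative.

```python
import math
import random

def adaptive_mcts(simulate, actions, max_sims=200, check_every=20, tol=0.1):
    """Budget-adaptive search at a single root state: stop as soon as
    the visit distribution over root actions stops changing.
    (V-MCTS itself compares against virtually expanded visit counts;
    this convergence check is a simplified stand-in.)"""
    visits = {a: 0 for a in actions}
    values = {a: 0.0 for a in actions}
    prev_dist, used = None, max_sims
    for n in range(1, max_sims + 1):
        # UCB1 selection at the root; unvisited actions are tried first.
        a = max(actions, key=lambda a: float("inf") if visits[a] == 0
                else values[a] / visits[a] + math.sqrt(2 * math.log(n) / visits[a]))
        values[a] += simulate(a)
        visits[a] += 1
        if n % check_every == 0:
            dist = [visits[a] / n for a in actions]
            if prev_dist and max(abs(p - q) for p, q in zip(dist, prev_dist)) < tol:
                used = n
                break  # visit distribution stabilized: stop early
            prev_dist = dist
    return max(actions, key=lambda a: visits[a]), used

random.seed(0)
# An "easy" state: one action is clearly better, so search can halt early.
best, used = adaptive_mcts(
    lambda a: random.gauss(1.0 if a == "right" else 0.0, 0.1),
    ["left", "right"])
```

On hard states (near-equal action values) the visit distribution keeps shifting and the loop runs to the full budget, which is the adaptive behavior the paper targets.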
A $1000\times$ Faster LLM-enhanced Algorithm For Path Planning in Large-scale Grid Maps
Zeng, Junlin, Zhang, Xin, Zhao, Xiang, Pan, Yan
Path planning in grid maps, arising from various applications, has garnered significant attention. Existing methods, such as A*, Dijkstra, and their variants, work well for small-scale maps but fail on large-scale ones due to high search time and memory consumption. Recently, Large Language Models (LLMs) have shown remarkable performance in path planning but still suffer from spatial illusion and poor planning performance. Among existing works, LLM-A* \cite{meng2024llm} leverages an LLM to generate a series of waypoints and then uses A* to plan the path between neighboring waypoints, thereby constructing the complete path. However, LLM-A* still suffers from high computational time on large-scale maps. To fill this gap, we investigated LLM-A* in depth and identified the bottlenecks that limit its performance. Accordingly, we design an innovative LLM-enhanced algorithm, abbreviated iLLM-A*. iLLM-A* comprises three carefully designed mechanisms: an optimization of A*, an incremental learning method for the LLM to generate high-quality waypoints, and the selection of appropriate waypoints for A* path planning. Finally, a comprehensive evaluation on various grid maps shows that, compared with LLM-A*, iLLM-A* \textbf{1) achieves more than $1000\times$ speedup on average, and up to $2349.5\times$ speedup in the extreme case, 2) saves up to $58.6\%$ of the memory cost, and 3) achieves both a noticeably shorter path length and a lower path-length standard deviation.}
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
- Asia > China > Hunan Province > Changsha (0.04)
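The waypoint decomposition that LLM-A* and iLLM-A* build on can be sketched concisely: run plain A* between consecutive waypoints and stitch the segments together. This is a minimal illustration of the decomposition only (the waypoints would come from the LLM); all function names are ours, not from the paper's code.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free cell, 1 = obstacle)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tick = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(tick), 0, start, None)]
    parent, g_best = {}, {start: 0}
    while frontier:
        _, _, g, node, par = heapq.heappop(frontier)
        if node in parent:
            continue          # already expanded via a cheaper route
        parent[node] = par
        if node == goal:      # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_best.get(nxt, float("inf"))):
                g_best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tick), g + 1, nxt, node))
    return None  # goal unreachable

def waypoint_astar(grid, waypoints):
    """Stitch A* segments between consecutive waypoints into one path."""
    full = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        seg = astar(grid, a, b)
        if seg is None:
            return None
        full.extend(seg[1:])  # drop the duplicated joint node
    return full
```

Each A* call now searches a much smaller subproblem, which is where the speedup over a single global A* search comes from on large maps.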
Reducing Street Parking Search Time via Smart Assignment Strategies
Hemmatpour, Behafarid, Dogani, Javad, Laoutaris, Nikolaos
In dense metropolitan areas, searching for street parking adds to traffic congestion. As with many similar problems, real-time mobile-phone-based assistants have been proposed, but their effectiveness is understudied. This work quantifies how varying levels of user coordination and information availability through such apps impact search time and the probability of finding street parking. Through a data-driven simulation of Madrid's street parking ecosystem, we analyze four distinct strategies: uncoordinated search (Unc-Agn), coordinated parking without awareness of non-users (Cord-Agn), an idealized oracle system that knows the positions of all non-users (Cord-Oracle), and our novel, practical Cord-Approx strategy, which estimates non-users' behavior probabilistically. Instead of requiring knowledge of how close non-users are to a given spot in order to decide whether to navigate toward it, Cord-Approx uses past occupancy distributions to elongate the physical distances between system users and alternative parking spots, and then solves a Hungarian matching problem to dispatch users accordingly. In high-fidelity simulations of Madrid's parking network with real traffic data, users of Cord-Approx averaged 6.69 minutes to find parking, compared to 19.98 minutes for non-users without the app. A zone-level snapshot shows that Cord-Approx reduces search time for system users by 72% (range 67-76%) in central hubs, and up to 73% in residential areas, relative to non-users.
- Europe > Spain > Community of Madrid > Madrid (0.46)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (4 more...)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
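The Cord-Approx mechanics described above can be sketched in a few lines: inflate each driver-to-spot distance by the spot's historical occupancy, then solve a minimum-cost assignment. The numbers below are illustrative, not from the paper, and the brute-force matcher stands in for a proper Hungarian solver (e.g. SciPy's `linear_sum_assignment`).

```python
from itertools import permutations

def elongated_cost(dist, occupancy):
    """Scale driver-to-spot distances by inverse expected availability:
    a spot that is historically almost always occupied looks 'farther'."""
    return [[d / max(1e-9, 1.0 - occ) for d, occ in zip(row, occupancy)]
            for row in dist]

def assign(cost):
    """Minimum-cost one-to-one assignment. Brute force for clarity only;
    a real system would use the Hungarian algorithm as the abstract says."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best_perm

# Illustrative scenario: spot 0 is close to both drivers but occupied
# 90% of the time; spot 1 is farther but usually free.
dist = [[1.0, 2.0],   # driver 0 -> spot 0, spot 1
        [2.0, 8.0]]   # driver 1 -> spot 0, spot 1
occupancy = [0.9, 0.2]
```

On raw distances the matcher sends driver 1 to the nearby spot 0; after elongation, the busy spot is penalized and driver 1 is dispatched to the farther but likely-free spot 1, which is exactly the behavior Cord-Approx aims for.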
e1c13a13fc6b87616b787b986f98a111-Supplemental.pdf
This section gives the worst-case time analysis for Algorithm 1, which yields the bound shown in Eq. 3.

B.1 Loss function space L

Recall that the loss function search space is defined as:

(Loss Function Search Space)
L ::= targeted Loss, n with Z | untargeted Loss with Z | targeted Loss, n - untargeted Loss with Z
Z ::= logits | probs

To refer to different settings, we use the following notation: U for the untargeted loss, T for the targeted loss, D for the targeted minus untargeted loss, L for using logits, and P for using probs. Effectively, the search space includes all possible combinations, except that the cross-entropy loss supports only probabilities.

B.2 Attack Algorithm & Parameter Space S

Recall the attack space defined as:

S ::= S; S | randomize S | EOT S, n | repeat S, n | try S for n | Attack with params with loss L

The type of every parameter is either integer or float. Generic parameters and the supported losses for each attack algorithm are defined in Table 4.

B.3 Search space conditioned on network property

Following Stutz et al. (2020), we use the robust test error (Rerr) metric and define robust accuracy as 1 - Rerr. Note, however, that Rerr as defined in Eq. 5 contains an intractable maximization problem in the denominator. We use a zero-knowledge detector model, so none of the attacks in the search space are aware of the detector.
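The combinatorics of the loss grammar in B.1 can be made concrete with a short enumeration sketch. This is an assumption-laden illustration: "margin" stands in for any non-cross-entropy base loss (the paper's concrete loss list lives in its Table 4 and is not reproduced here), and only the stated constraint (cross-entropy supports only probs) is encoded.

```python
from itertools import product

TARGET_MODES = ["U", "T", "D"]             # untargeted, targeted, targeted - untargeted
BASE_LOSSES = ["cross_entropy", "margin"]  # "margin" is an illustrative placeholder
INPUTS = ["logits", "probs"]               # the Z nonterminal

def loss_space():
    """Enumerate (base, mode, Z) triples, dropping combinations where
    cross-entropy is paired with logits (it supports only probs)."""
    return [(base, mode, z)
            for base, mode, z in product(BASE_LOSSES, TARGET_MODES, INPUTS)
            if not (base == "cross_entropy" and z == "logits")]
```

With two base losses this yields 2 x 3 x 2 = 12 combinations minus the 3 disallowed cross-entropy-on-logits triples, i.e. 9 candidate losses for the search.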